
    An Effective Multi-Cue Positioning System for Agricultural Robotics

    The self-localization capability is a crucial component for Unmanned Ground Vehicles (UGVs) in farming applications. Approaches based solely on visual cues or on low-cost GPS are prone to failure in such scenarios. In this paper, we present a robust and accurate 3D global pose estimation framework designed to take full advantage of heterogeneous sensory data. By modeling the pose estimation problem as a pose graph optimization, our approach simultaneously mitigates the cumulative drift introduced by motion estimation systems (wheel odometry, visual odometry, ...) and the noise introduced by raw GPS readings. Along with a suitable motion model, our system also integrates two additional types of constraints: (i) a Digital Elevation Model and (ii) a Markov Random Field assumption. We demonstrate how using these additional cues substantially reduces the error along the altitude axis and, moreover, how this benefit spreads to the other components of the state. We report exhaustive experiments combining several sensor setups, showing accuracy improvements ranging from 37% to 76% with respect to the exclusive use of a GPS sensor. We show that our approach provides accurate results even if the GPS unexpectedly changes positioning mode. The code of our system, along with the acquired datasets, is released with this paper.
    Comment: Accepted for publication in IEEE Robotics and Automation Letters, 2018
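    To make the fusion idea concrete, here is a minimal sketch of pose graph optimization with relative (odometry) and absolute (GPS) constraints, reduced to one dimension. It is not the paper's implementation, which works on full 3D poses and adds DEM and MRF constraints; the measurement values and weights below are hypothetical.

```python
# Minimal 1D pose-graph sketch: poses are variables, odometry gives
# constraints between consecutive poses, GPS gives unary constraints,
# and the fused trajectory is the least-squares solution.
import numpy as np
from scipy.optimize import least_squares

odom = np.array([1.0, 1.1, 0.9, 1.0])      # relative motions (locally accurate, drifting)
gps = np.array([0.0, 1.2, 1.9, 3.2, 4.1])  # absolute readings (noisy, unbiased)
w_odom, w_gps = 10.0, 1.0                  # information weights (illustrative)

def residuals(x):
    r_odom = w_odom * ((x[1:] - x[:-1]) - odom)  # consecutive-pose constraints
    r_gps = w_gps * (x - gps)                    # absolute constraints
    return np.concatenate([r_odom, r_gps])

sol = least_squares(residuals, x0=gps.copy())    # initialize from raw GPS
print("fused trajectory:", np.round(sol.x, 3))
```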

    Building an Aerial-Ground Robotics System for Precision Farming: An Adaptable Solution

    The application of autonomous robots in agriculture is gaining popularity thanks to the high impact it may have on food security, sustainability, resource use efficiency, reduction of chemical treatments, and optimization of human effort and yield. With this vision, the Flourish research project aimed to develop an adaptable robotic solution for precision farming that combines the aerial survey capabilities of small autonomous unmanned aerial vehicles (UAVs) with targeted intervention performed by multi-purpose unmanned ground vehicles (UGVs). This paper presents an overview of the scientific and technological advances and outcomes obtained in the project. We introduce the multi-spectral perception algorithms and the aerial and ground-based systems developed for monitoring crop density, weed pressure, and crop nitrogen nutrition status, and for accurately classifying and locating weeds. We then introduce the navigation and mapping systems tailored to our robots in the agricultural environment, as well as the modules for collaborative mapping. We finally present the ground intervention hardware, software solutions, and interfaces we implemented and tested in different field conditions and with different crops. We describe a real use case in which a UAV collaborates with a UGV to monitor the field and to perform selective spraying without human intervention.
    Comment: Published in IEEE Robotics & Automation Magazine, vol. 28, no. 3, pp. 29-49, Sept. 2021

    D2CO: Fast and Robust Registration of 3D Textureless Objects Using the Directional Chamfer Distance

    This paper introduces a robust and efficient vision-based method for object detection and 3D pose estimation that exploits a novel edge-based registration algorithm we call Direct Directional Chamfer Optimization (D2CO). Our approach is able to handle textureless and partially occluded objects and does not require any off-line object learning step. Depth edges and visible patterns extracted from the 3D CAD model of the object are matched against edges detected in the current grey-level image by means of a 3D distance transform represented by an image tensor that encodes the minimum distance to an edge point in a joint direction/location space. D2CO refines the object position by employing a non-linear optimization procedure, where the cost being minimized is extracted directly from the 3D image tensor. Unlike other popular registration algorithms such as ICP, which require constantly updating the correspondences between points, our approach does not require any iterative re-association step: the data association is implicitly optimized while inferring the object position. This enables D2CO to obtain a considerable gain in speed over other registration algorithms while presenting a wider basin of convergence. We tested our system on a set of challenging untextured objects in the presence of occlusions and cluttered backgrounds, showing accurate results and often outperforming other state-of-the-art methods.
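    A rough sketch of the joint direction/location distance tensor described above: image edges are split into orientation bins, and each bin gets its own distance transform, so a lookup at (orientation, y, x) approximates a directional chamfer cost. The real D2CO also propagates distances across orientation bins and optimizes the 6-DoF pose with a non-linear solver; this shows only the data-term construction, with hypothetical inputs.

```python
import cv2
import numpy as np

N_BINS = 8  # orientation quantization (assumption)

def directional_distance_tensor(gray):
    edges = cv2.Canny(gray, 50, 150)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    # Gradient orientation folded to [0, pi) as a proxy for edge direction.
    theta = np.mod(np.arctan2(gy, gx), np.pi)
    bins = (theta / np.pi * N_BINS).astype(int) % N_BINS
    tensor = np.empty((N_BINS,) + gray.shape, np.float32)
    for b in range(N_BINS):
        mask = (edges > 0) & (bins == b)
        # distanceTransform measures distance to the nearest zero pixel,
        # so edge pixels of this bin are set to 0.
        src = np.where(mask, 0, 255).astype(np.uint8)
        tensor[b] = cv2.distanceTransform(src, cv2.DIST_L2, 3)
    return tensor

def chamfer_cost(tensor, pts, dirs):
    """pts: (N,2) projected model edge points inside the image;
    dirs: (N,) edge directions in [0, pi)."""
    b = (dirs / np.pi * N_BINS).astype(int) % N_BINS
    return tensor[b, pts[:, 1].astype(int), pts[:, 0].astype(int)].mean()
```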

    FlexSight - A Flexible and Accurate System for Object Detection and Localization for Industrial Robots

    We present a novel smart camera, the FlexSight C1, designed to enable an industrial robot to detect and localize several types of objects and parts in an accurate and reliable way. The C1 integrates all the sensors and a powerful mini computer with a complete operating system, running robust 3D reconstruction and object localization algorithms on-board: it can thus be connected directly to the robot, which is guided by the device during the production cycle without any external computers in the loop. In this paper, we describe the FlexSight C1 hardware configuration along with the algorithms designed to address the model-based localization problem for textureless objects, namely: (1) an improved version of the PatchMatch stereo matching algorithm for depth estimation; (2) an object detection pipeline based on deep transfer learning with synthetic data. All the presented algorithms have been tested on publicly available datasets, showing effective results and improved runtime performance.
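    The abstract names deep transfer learning with synthetic data for detection. A common way to realize that, sketched below, is to take a detector pre-trained on real images and fine-tune only its head on synthetically rendered views; this is an illustration, not the FlexSight code, and the class count is an assumption.

```python
# Fine-tune a pre-trained Faster R-CNN head on synthetic renderings.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 2  # background + one textureless part (assumption)

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Freeze the backbone: the synthetic-to-real gap tends to hurt less in the
# task head than in low-level features learned from real images.
for p in model.backbone.parameters():
    p.requires_grad = False

params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=5e-3, momentum=0.9, weight_decay=5e-4)

model.train()
# Training loop omitted: images are lists of CHW float tensors, targets are
# dicts with 'boxes' and 'labels' produced by the synthetic renderer.
```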

    Machine Vision for Embedded Devices: from Synthetic Object Detection to Pyramidal Stereo Matching

    In this work we present an embedded, all-in-one system for machine vision in industrial settings. The system enhances the capabilities of an industrial robot by providing vision and perception, e.g., deep learning-based object detection and 3D reconstruction by means of efficient and highly scalable stereo matching. To this purpose, we implemented and tested innovative solutions for object detection based on synthetically trained deep networks, and a novel approach for depth estimation that embeds traditional 3D stereo matching within a pyramidal framework in order to reduce the computation time. Both object detection and 3D stereo matching have been efficiently implemented on the embedded device. Results and performance of the implementations are given for publicly available datasets, in particular the T-LESS dataset for textureless object detection and the KITTI Stereo and Middlebury Stereo datasets for depth estimation.
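    The pyramidal framework can be illustrated as follows: run a full disparity search at a coarse scale, then at the fine scale search only a small band around the upsampled coarse estimate, which is where the computation saving comes from. This is a toy SAD block matcher under assumed parameters, not the paper's implementation.

```python
import cv2
import numpy as np

def full_search(left, right, max_d, win=7):
    """Coarse level: winner-take-all SAD over the whole disparity range."""
    best_d = np.zeros(left.shape, np.float32)
    best_c = np.full(left.shape, np.inf, np.float32)
    for d in range(max_d + 1):
        shifted = np.roll(right, d, axis=1)  # border wraparound ignored in this sketch
        cost = cv2.boxFilter(np.abs(left - shifted), -1, (win, win))
        upd = cost < best_c
        best_d[upd] = d
        best_c = np.minimum(cost, best_c)
    return best_d

def pyramidal_disparity(left, right, max_d=64, radius=2, win=7):
    left = left.astype(np.float32); right = right.astype(np.float32)
    # 1. Full search at half resolution (disparity range is halved too).
    coarse = full_search(cv2.pyrDown(left), cv2.pyrDown(right), max_d // 2, win)
    # 2. Upsample the estimate; disparity scales with image width, so double it.
    h, w = left.shape
    guide = 2.0 * cv2.resize(coarse, (w, h), interpolation=cv2.INTER_NEAREST)
    # 3. Fine level: test only a small band of offsets around the guide.
    ys = np.arange(h)[:, None]
    xs = np.arange(w)[None, :]
    best_d = guide.copy()
    best_c = np.full((h, w), np.inf, np.float32)
    for off in range(-radius, radius + 1):
        d = np.clip(guide + off, 0, max_d)
        xr = np.clip(xs - d.astype(np.int32), 0, w - 1)
        cost = cv2.boxFilter(np.abs(left - right[ys, xr]), -1, (win, win))
        upd = cost < best_c
        best_d[upd] = d[upd]
        best_c = np.minimum(cost, best_c)
    return best_d
```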

    Learning to Segment Human Body Parts with Synthetically Trained Deep Convolutional Networks

    This paper presents a new framework for human body part segmentation based on Deep Convolutional Neural Networks trained using only synthetic data. The proposed approach achieves cutting-edge results without the need to train the models on real annotated data of human body parts. Our contributions include a data generation pipeline that exploits a game engine to create the synthetic data used for training the network, and a novel pre-processing module that combines edge response maps and adaptive histogram equalization to guide the network to learn the shape of the human body parts, ensuring robustness to changes in illumination conditions. For selecting the best candidate architecture, we perform exhaustive tests on manually annotated images of real human body limbs. We further compare our method against several high-end commercial segmentation tools on the body part segmentation task. The results show that our method outperforms the other models by a significant margin. Finally, we present an ablation study to validate our pre-processing module. With this paper, we release an implementation of the proposed approach along with the acquired datasets.
    Comment: Submitted to the 16th International Conference on Intelligent Autonomous System (IAS-16)
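    A sketch of the pre-processing idea described above: combine adaptive histogram equalization (CLAHE) with edge responses so the network sees shape-dominated, illumination-normalized input. The parameter values and the choice of a 3-channel stacking are assumptions, not the paper's exact module.

```python
import cv2
import numpy as np

def preprocess(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    eq = clahe.apply(gray)                 # illumination-normalized intensity
    edges = cv2.Canny(eq, 50, 150)         # binary edge response map
    gx = cv2.Sobel(eq, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(eq, cv2.CV_32F, 0, 1)
    mag = cv2.normalize(cv2.magnitude(gx, gy), None, 0, 255, cv2.NORM_MINMAX)
    # Stack into a 3-channel image so a standard CNN backbone can consume it.
    return np.dstack([eq, edges, mag.astype(np.uint8)])
```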

    Robotics for Precision Agriculture @DIAG

    Flourish is a recent H2020 project whose aim was to develop a multi-platform robotic solution for precision agriculture, combining a micro UAV and a UGV. This document sketches the contribution of Sapienza University of Rome in the context of the Flourish project, as well as the current follow-up activities in precision agriculture.

    Pushing the Limits of Learning-based Traversability Analysis for Autonomous Driving on CPU

    Self-driving vehicles and autonomous ground robots require a reliable and accurate method to analyze the traversability of the surrounding environment for safe navigation. This paper proposes and evaluates a real-time, machine learning-based traversability analysis method that combines geometric features with appearance-based features in a hybrid approach based on an SVM classifier. In particular, we show that integrating a new set of geometric and visual features and focusing on important implementation details enables a noticeable boost in performance and reliability. The proposed approach has been compared with state-of-the-art Deep Learning approaches on a public dataset of outdoor driving scenarios. It reaches an accuracy of 89.2% in scenarios of varying complexity, demonstrating its effectiveness and robustness. The method runs entirely on CPU, reaches results comparable with the other methods, operates faster, and requires fewer hardware resources.
    Comment: Accepted to the 17th International Conference on Intelligent Autonomous Systems (IAS-17)
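    The hybrid structure described above, an SVM over concatenated geometric and appearance features, can be sketched as follows. The feature choices here are illustrative placeholders; the paper's actual feature set and kernel tuning are not reproduced.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def cell_features(points, rgb_patch):
    """Features for one terrain grid cell: geometry from its point cloud
    (N x 3 array), appearance from the corresponding camera patch."""
    geometric = [
        points[:, 2].std(),                       # height variance (roughness proxy)
        points[:, 2].max() - points[:, 2].min(),  # step height
    ]
    appearance = [
        rgb_patch.mean(),                         # brightness
        rgb_patch.std(),                          # texture proxy
    ]
    return np.array(geometric + appearance)

# Scaling matters for RBF SVMs, hence the pipeline; C is an assumption.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
# X: (n_cells, n_features) stacked cell_features; y: 1 traversable, 0 not.
# clf.fit(X, y); clf.predict(X_new)
```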